The Capability Approach to Human Welfare
This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think it provides a better way of thinking about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB), or on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do. I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.
This post will have four sections. First, I will describe the capability approach. Second, I will give some simple examples that illustrate why I think that aiming to maximize capabilities[1] is the best way to do good for people. I’ll frame these examples in opposition to other common approaches, but my goal here is mostly constructive: to argue for the capability approach rather than against maximizing, for example, SWB. Third, I will describe what I see as the largest downsides to the capability approach as well as possible responses to these downsides. Fourth and finally, I will explain my weakly-held theory that much of how global health and international development organizations, including GiveWell, behave owes to the deep (but often unrecognized) influence of the capability approach on their thought.
The capability approach
The fundamental unit of value in the capability approach is a functioning, which is anything that you can be or do. Eating is a functioning. Being an EA is a functioning. Other functionings include: being a doctor, running, practicing Judaism, sleeping, and being a parent. Capabilities are options to be or do a functioning.[2] The goal of the capability approach is not to maximize the number of capabilities available to people; it is to maximize the number of sets of capabilities. The notion here is that if you simply maximized the number of capabilities, you might enable someone to be: a parent or employed outside the home. But someone might want to do both. If you’re focusing on maximizing the number of sets of capabilities, then you’ll end up with: parent, employed, both parent and employed, and neither. The simple beauty of this setup is that it aims to maximize the options that people have available to them, from which they then select the group of functionings that they want most. This is why one great book about this approach is entitled “Development as Freedom.” The argument is that development is the process of expanding capabilities, or individual freedom to live the kind of life that you want.
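To make the capabilities-versus-sets distinction concrete, here is a minimal sketch in Python. The functionings are hypothetical, and in reality the approach tracks which combinations are genuinely jointly feasible rather than assuming all of them are:

```python
from itertools import combinations

# Hypothetical capabilities (genuine options) one person might have.
capabilities = ["parent", "employed", "jogging"]

def capability_sets(caps):
    """Enumerate every combination of capabilities a person could jointly
    realize: the power set of their individual capabilities."""
    for r in range(len(caps) + 1):
        for combo in combinations(caps, r):
            yield set(combo)

sets = list(capability_sets(capabilities))
print(len(capabilities))  # 3 individual capabilities
print(len(sets))          # 8 capability sets, including the empty set
```

The caveat in the comment matters: being a full-time doctor and a full-time parent may conflict, which is exactly why the approach cares about sets directly rather than deriving them mechanically from a list of individual options.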
I will come to criticisms later on, but one thing people may note is that this approach will lead to a lot of sets of capabilities and we will need some way to rank them or condense the list. In theory, we would want to do this based on how much people value each capability set. I will discuss this issue in more detail in the third section.
Examples of why I love the capability approach
Here I’ll lay out a few examples that show why I think the capability approach is the best way to think about improving human welfare.
First, in opposition to preference-satisfaction approaches, the capability approach values options not taken. I think this accords with most of our intuitions, and that it takes real work for economics to train it out of people. Here are two examples:
Imagine two children. The first has domineering parents who inform her that she has to grow up to be a doctor. They closely control her school and extracurriculars in order to make this happen, but she doesn’t mind. As it happens she wants to be a doctor, and she will grow up to be a doctor. The second child has parents who tell her she can do what she wants, and they broadly support her. She picks the same school and extracurricular options as the first child, and she also grows up to be a doctor. The two children had the same outcomes, and both were able to select their top options for school, extracurriculars, and career. On most preference-satisfaction approaches they are equally well off. Under the capability approach, however, the second child was much better off, as she had so many more options open to her.
Imagine two cities. In one, it is safe for women to walk around at night and in the second it is not. I think the former city is better even if women don’t want to walk around at night, because I think that option is valuable to people even if they do not take it. Preference-satisfaction approaches miss this.
Second, I think that the capability approach gives us a better sense of who is facing deprivation, and a better sense of how to prioritize allocating resources or aid, than other approaches. This is partly because capabilities are genuine options to do functionings, and so they are objective in a way that SWB or happiness is not. For example, you either do or do not have the option of being well nourished. This allows the capability approach to avoid some of the problems that arise when people have odd functions mapping some kind of aid to utility or happiness, and therefore become either utility monsters or get written off as unworthy of aid because the adversity they face doesn’t show up in SWB or utility.
As a toy example, we can imagine someone who has a low level of capabilities because of discrimination in their society. Such discrimination usually comes along with stories for why it is good or normal—or it’s simply taken for granted—and so it isn’t far-fetched to suggest that someone facing discrimination could have utility or happiness that is as high as that of people from groups that do not face such discrimination. One could conclude from this that there is nothing to be done, as discrimination doesn’t affect happiness in this case. I find this repugnant, as the group facing discrimination has fewer capabilities. While this is a toy example, it’s not hard to find real cases. For example, in the United States women were happier than men in the 1970s.[3] Does this imply that we should have been focusing more on helping men than women in the 1970s? I doubt it, and my reasoning stems from the fact that 50 years ago women lacked many important capabilities that men had. Using subjective measures to allocate aid means that targeting will depend in part on people’s ability to imagine a better future (and thus feel dissatisfaction with the present). I don’t think conditioning aid on one’s imagination is justified, and so I would prefer measures based on objective criteria such as whether or not one has the ability to do things (eat, vote, send kids to school).
Third, maximizing is (usually) perilous. I think the perilousness of maximizing strongly applies to maximizing money or happiness or SWB, but I actually think maximizing capabilities is a lot safer because they are options. I’ll give two examples, one personal and one that is (hopefully) more fanciful. Personally, I’m nearly always a 7⁄10 or 8⁄10 on the common happiness-type questions. I guess this means that I could “improve” my score, but honestly I’m not trying to do that at all. Worse, if you told me that I was 10⁄10 happy over a long period of time I would be worried about my mental state. I don’t want to be that happy. Like everyone, I’m a confused mess of priorities. I want some happiness, but I also want to work hard on things even if it ends up making me sad. I want to build things, both for other people and just so that they exist. I had kids even though I knew that in expectation they would make me less happy, and I don’t regret this choice even a little. I want good art to exist literally for its own sake. Sometimes I feel good but then seek out art that makes me so sad that I cry. (Model that!) I want humans to better understand the laws of the universe, regardless of happiness. When I reflect on what I care about, I care about humans becoming wildly capable. Further, I expect that as we increase our options we will all care about different things and maximize different things. This fits the capability approach really well because it gives us options. It does not fit any approach that seeks to maximize “the one best functioning.”
More fancifully, imagine that we manage to successfully train a god-like AI to actually maximize something.[4] I think if we make that something “happiness” then we’re in big trouble. We’ll all end up on some IV drug drip or the equivalent, and to me that’s a nightmare. However, if we maximize (value-weighted) capabilities then I think we’re in a much better position because, again, these are just options available to people.[5] My point here is not that this solves alignment or whatever, it’s that if you agree that maximizing capabilities is not as fraught as maximizing other things, then that’s a really big deal and strongly suggests that this approach is pointing us in a good direction.
The capability approach is less fraught than others because it is mostly agnostic about what people choose to do with their capabilities. This really matters because (1) we are so diverse, and (2) most optimizers (including us EAs) know less about what each person wants than they do. As a final example of this problem, let’s consider the billions of very religious people alive right now, many of whom live in low-income countries. Do they want to maximize their personal happiness or SWB? The religious people I know do not seem to want to do that. They care, among other things, about their religious community and giving proper respect and glory to God, and they care about the afterlife. As EAs, we should try to give these (and all) people more options to do things that they care about. We should not impose on them our favourite functioning, like having high SWB.
Downsides to the capability approach
While strong in theory, the capability approach has a number of serious downsides that limit how fully it can be implemented in practice. I will describe some of them here, and why I think that despite these downsides the capability approach offers the best conceptual machinery for thinking about how to most improve human welfare.
The first (potential) downside is that the capability approach is highly liberal and anti-paternalistic. It treats individuals as the unit of analysis (groups of people cannot have capabilities) and it assumes that people know best what they want. The goal of policy makers, EAs, aid workers, or some godlike AI is to give people valuable options. People then get to pick for themselves what they actually want to do. If you are not liberal or if you are paternalistic, then you may not like the capability approach.
A second downside is that the number of sets of capabilities is incredibly large, and the value that we would assign to each capability set likely varies quite a bit, making it difficult to cleanly measure what we might optimize for in an EA context. When faced with this problem people do a few things. If you’re Martha Nussbaum you end up making a master list of capabilities and then try to get people to focus on those. This is unattractive to me. If you’re Amartya Sen, you embrace the chaos and try to be pragmatic. Yes, it’s true that people would rank capability sets differently and that they’re very high dimensional, but that’s because life is actually like this. We should not see this and run away to the safety of clean (and surely wrong) simple indices. Instead, we should try to find ways of dealing with this chaos that are approximately right. Here are three examples that start with the theory of the capability approach but then make pragmatic concessions in order to try to be approximately right. My goal in giving these examples is not to say that these are ideal, but just to give illustrations about how people try to start from the capability approach and then move into the realm of practice.
The Human Development Index grew out of a desire to have a country-level index inspired by the capability approach. This was always going to look odd, as countries cannot have capabilities. The approach ended with a sort of average of country-level scores on education, health, and productivity. There are all kinds of issues with this, but I think it has value relative to an alternative that says “the HDI is confusing or assigns hard-to-defend weights to some dimension, so I will give 100% weight to dimension x (income, happiness) and 0% to everything else.” I’d rather we be approximately right than precisely wrong.
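For concreteness, here is a rough sketch of an HDI-style calculation. The goalposts and the geometric-mean aggregation follow my reading of the post-2010 UNDP methodology; treat the exact numbers as assumptions worth checking rather than gospel:

```python
import math

def dim_index(value, lo, hi):
    """Normalize a raw value onto [0, 1] between goalposts lo and hi."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def hdi(life_expectancy, mean_school_years, expected_school_years, gni_per_capita):
    """HDI-style index: geometric mean of health, education, and income indices."""
    health = dim_index(life_expectancy, 20, 85)
    education = (dim_index(mean_school_years, 0, 15)
                 + dim_index(expected_school_years, 0, 18)) / 2
    # Income enters in logs, reflecting diminishing returns to money.
    income = dim_index(math.log(gni_per_capita), math.log(100), math.log(75_000))
    return (health * education * income) ** (1 / 3)

print(round(hdi(72.0, 8.0, 12.5, 11_000), 3))  # illustrative, made-up inputs
```

One design choice worth noting: with a geometric mean, a collapse in any one dimension drags the whole index down, so the dimensions are not treated as fully substitutable.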
The second way of operationalizing the capability approach is to push things down to the level of individuals and then do a roughly similar kind of exercise. This yields, for example, the Multidimensional Poverty Index.
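As I understand it, the global Multidimensional Poverty Index uses the Alkire-Foster counting method. Here is a minimal sketch of the core calculation, with the indicator weighting elided and each person reduced to a single weighted deprivation score:

```python
def mpi(deprivation_scores, k=1/3):
    """Alkire-Foster style index: a person is multidimensionally poor if
    their weighted deprivation score is at least the cutoff k. The index
    is incidence (H) times average intensity among the poor (A)."""
    poor = [s for s in deprivation_scores if s >= k]
    if not poor:
        return 0.0
    H = len(poor) / len(deprivation_scores)  # share of people who are poor
    A = sum(poor) / len(poor)                # how deprived the poor are, on average
    return H * A

print(round(mpi([0.0, 0.2, 0.4, 0.5, 0.9]), 2))  # five hypothetical people -> 0.36
```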
The third approach, which I personally prefer, is to not even try to make an index but instead to track various clearly important dimensions separately and to try to be open and pragmatic and get lots of feedback from the people “being helped.” If you take this approach and think about it a bit, you will realize that there are two things that are very important to a very large number of capability sets. Those things are (1) being alive, and (2) having resources, which often means money. The argument for this should be familiar to EAs, as it’s very similar to why some of us think that AI agents might try to avoid being turned off and to gather resources: these things are generally very useful.
The influence of the capability approach
This leads me to my last point, which is that the capability approach has been so influential in international development thought that many people and organizations do things that make sense under the capability approach even though they may not realize it. The Keynes quote about practical men applies here, though, incredibly, Amartya Sen is still alive.
The example most relevant to EA is GiveWell and OpenPhil both prioritizing income gains and lives saved as privileged metrics. This has sometimes been criticized, but under the capability approach it makes a lot of sense. If you don’t know precisely what to maximize for people, then picking staying alive and having resources is a very good start. I don’t know if people at these organizations actively read Sen or other related writers, but I think the capability approach offers a powerful defense of this choice. Money and staying alive are not necessarily where you want to end up, but they are very good starting points.
—
Thanks to anyone who read this far. If you want more, Inequality Re-examined by Amartya Sen is quite good. I’d also recommend the capability approach page of the Stanford Encyclopedia of Philosophy, which goes into details on key issues that I glossed over.
—
Minor edit: it was (correctly) pointed out to me on Twitter that the capability approach claims “that capabilities are the appropriate space for well-being comparisons and says nothing about whether capabilities should be maximized.” He’s right. My post mixes the capability approach with an implicit EA mindset, but for Sen those would be quite distinct.
- ^
As will become clearer later, the capability approach aims to maximize the number of groups (sets) of capabilities that people can select. Talk of “maximizing capabilities” is lazy shorthand.
- ^
They are options that you really could do. So, for example, given my age and fitness I do not have the capability of being a pro athlete. There is no rule stopping me, but let’s be real, it’s not going to happen.
- ^
The male-female happiness gap in the US then shrank until the 2000s, when men and women in the US were about equally happy. Should you actually believe this result? Maybe not, but please apply any skepticism you feel about it to all of the other happiness research.
- ^
I’m not an AI person. Please let me use this as an example without people responding with comments about how some inner-optimizer is doing… whatever. Take the point as one about goals, not about training or anything else.
- ^
The approach here would be something like, “maximize value-weighted sets of capabilities, where you can figure out the value of each set based on how people act. Be Bayesian and partially pool information across people that are socially close.” But again, let’s not get hung up on details that aren’t relevant to the broader post. And yes, we’d need to do something about animals, though recognize that “seeing animal x” is a functioning that many people value.
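As a toy illustration of the “partially pool” idea in this footnote (the shrinkage form, the prior strength, and all the numbers are my own assumptions, not anything from Sen or from the post):

```python
def pooled_value(individual_mean, n_obs, group_mean, prior_strength=5.0):
    """Shrink one person's revealed valuation of a capability set toward
    the mean of socially close people; a stand-in for full Bayesian pooling."""
    w = n_obs / (n_obs + prior_strength)  # more observations -> trust the individual more
    return w * individual_mean + (1 - w) * group_mean

# Hypothetical: two observed choices suggesting a valuation of 0.9,
# in a community whose average valuation of that set is 0.4.
print(round(pooled_value(0.9, 2, 0.4), 2))  # 0.54, pulled toward the group
```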
Thanks very much for writing this. It’s helpful to have a clear and succinct summary of the capabilities approach on the Forum and I thought the post was constructive and well-written. It provides a nice counterpoint to HLI’s post, To WELLBY or not to WELLBY?
However, the capabilities approach (as you describe it here) strikes me as deeply paternalistic. How do we decide which capabilities to prioritise without asking people how much they value them? We can’t just defer to Nussbaum.
To me, it looks like you’ve decided what the priorities should be based on what you think is “clearly important”. But as this post shows, humans are terrible at ‘affective forecasting’ i.e. we underestimate the importance of things that are resistant to hedonic adaptation and difficult to mentally simulate.
The thing is, we don’t have to guess. We can look at subjective wellbeing data from longitudinal studies to identify which capabilities have the most impact on people’s lives. The Origins of Happiness is the best example I’ve seen of this and is packed full of surprising insights. If adversity or discrimination had no effect on your subjective wellbeing, then those terms would be meaningless.
I think the crux of our different views is that I don’t see subjective wellbeing as one of many functionings. Instead, I place high credence on the view that wellbeing is the intrinsic good. Everyone cares about lots of things (positive emotions, achievements, having kids, art, knowledge, freedom, religious belief etc.) but you need to make trade-offs between them and that requires a common unit.
Here, I think you’re confusing emotional states with evaluations of life satisfaction. Most people don’t want to feel happy at a funeral. Instead, we want to be satisfied with our current experience, free from desires for a different state of affairs. When you chose to have kids, I expect you were trading off positive emotions for greater life satisfaction and that’s a totally reasonable thing to do. There’s a great clip of Daniel Kahneman discussing this here.
For me, dissatisfaction and suffering are synonymous so I would prioritise an unhappy billionaire over a happy rural farmer, even though this may seem counterintuitive to many. In practice, however, there are a lot of unhappy rural farmers and it’s much cheaper to help them. The reason I work at the Happier Lives Institute is that I want to understand what will really help them the most, rather than deferring to the common assumption that it must be income gains and lives saved.
(commenting in a personal capacity etc.)
Thanks for these questions.
I think that there are two main points where we disagree: first on paternalism and second on prioritizing mental states. I don’t expect I will convince you, or vice versa, but I hope that a reply is useful for the sake of other readers.
On paternalism, what makes the capability approach anti-paternalistic is that the aim is to give people options, from which they can then do whatever they want. Somewhat loosely (see fn1 and discussion in text), for an EA the capability approach means trying to max people’s choices. If you instead decide to try to max some specific functioning, like happiness, then you are being paternalistic, as you have decided for them what matters. Now, you correctly noted that I said that in practice I think increasing income is useful. Importantly, this is not because “being rich” is a key functioning. It is because for poor people income is a factor limiting their ability to do very many things, so increasing their income increases their capabilities quite a bit. The same thing clearly applies to not dying. Perhaps of interest to HLI, I can believe that not being depressed or not having other serious mental conditions is also a very important functioning that unlocks many capabilities.
You wrote that “We can look at subjective wellbeing data from longitudinal studies to identify which capabilities have the most impact on people’s lives.” Putting aside gigantic causal inference questions, which matter, you still cannot identify “which capabilities have the most impact on people’s lives”. At best, you will identify which functionings cause increases in your measured DV, which would be something like a happiness scale. To me, this is an impoverished view of what matters to people’s lives. I will note that you did not respond to my point about getting an AI to maximize happiness, or to the point that many people, such as many religious people, will just straight up tell you they aren’t trying to maximize their own happiness. I think these arguments make the point that happiness is important, but it is not the one thing that we all care about.
On purely prioritizing mental states, I think it is a mistake to prioritize “an unhappy billionaire over a happy rural farmer.” I think happiness as the one master metric breaks in all sorts of real-life cases, such as the one that I gave of women in the 1970s. Rather than give more cases, which might at this point be tedious, I think we can productively relate this point back to paternalism. I think if we polled American women and asked if they would want to go back to the social world of the 1970s—when they were on average happier—they would overwhelmingly say no. I think this is because they value the freedoms they gained from the 1970s forward. If I am right that they would not want to go back to the 1970s, then to say that they are mistaken and that life for American women was better in the 1970s is, again, to me paternalistic.
Finally, I should also say thank you for engaging on this. I think the topic is important and I appreciate the questions and criticisms.
Thanks very much for your reply. I agree this topic is important and should be discussed more.
Re: paternalism
I guess all altruistic acts have some element of paternalism (despite our best intentions). I think we both agree that we should give people options to improve their wellbeing, rather than forcing them into something. However, we have to decide which option(s) to provide—increasing income, extending lives, treating mental illness etc. - and this is where we differ. You seem to be prioritising the options based on intuition, whereas I prefer to use evidence from self-reports.
Re: mental states
In the case of polling women, the results would be subject to all of the affective forecasting biases I mentioned before. To avoid paternalism, we should let the data speak for itself. If women in the 1970s said they were 8⁄10 and women in the 2020s say they are 7⁄10 (I’m using made-up numbers here) then we should try to identify the cause(s) of that decline rather than dismiss the data on the assumption that life for women is clearly better than it used to be. Some things have clearly improved, but those improvements might be cancelled out by other factors which are not immediately obvious.
Re: AI maximising happiness
You said: “We’ll all end up on some IV drug drip or the equivalent, and to me that’s a nightmare,” but that’s a very speculative claim. Personally, I would be very surprised if a happiness-maximising AI put you in a situation that you perceived as a nightmare.
Re: religion
Here, I will defer to a blog post written by my colleague, Samuel Dupret, who thinks very deeply about this question.
Just to explain why I downvoted this comment: I think it is pretty defensive and not really engaging with the key points of the response, which gave no indication that would justify a conclusion like “You seem to be prioritising the options based on intuition, whereas I prefer to use evidence from self-reports.”
There is nothing in the capability approach as explained that would keep you from using survey data to consider which options to provide. On the contrary, I would argue it is more open and flexible for such an approach because it is less limited in the types of questions to ask in such surveys. The capability approach simply highlights that life satisfaction or wellbeing are not necessarily the only measures that can be used. For instance, you could also ask what functionings provide meaning to your life, which may be correlated with life satisfaction but is not necessarily the same thing (e.g., see the examples that were given).
On paternalism, just a note to point out that unlike Nussbaum, Sen and others have resisted offering specific capabilities, the idea being that these should not be handed down by economists but democratically derived. (I’m not sure how workable this is in practice or to what extent it’s been tried, would be interested if anyone knows more!)
That’s good to know, thanks for clarifying. A democratic process is definitely better than a top-down approach, but everyone who participates in that process will be subject to affective forecasting biases too. That’s why I favour using subjective wellbeing data, but I’m keen to hear about alternative options too.
One trouble I’ve always had with the capabilities approach is how one figures out what counts as a capability worth having. For example, I agree it’s good for people to be able to choose their career and to walk outside safely at night. But it seems to me like this is precisely because people generally have strong preferences about what career to have and about their safety. If there was a law restricting people from spinning in a circle and clapping their hands exactly 756 times, this would be less bad than restricting people from walking outside at night, and there’s a simple preference-satisfaction explanation for this. What would be the capabilities approach explanation for this?
It also seems odd to me that capabilities would matter intrinsically. That is, it doesn’t seem intrinsically important to me that people are merely able to satisfy their preferences. It seems more important that their preferences are actually satisfied.
Good questions.
I tried to address the first one in the second part of the Downsides section. It is indeed the case that while the list of capability sets available to you is objective, your personal ranking of them is subjective and the weights can vary quite a bit. I don’t think this problem is worse than the problems other theories face (turns out adding up utility is hard), but it is a problem. I don’t want to repeat myself too much, but you can respond to this by trying to make a minimal list of capabilities that we all value highly (Nussbaum), or you can try to be very contextual (within a society or subgroup of a society, the weights may not be so different), or you can try to find minimal things that unlock lots of capabilities (like income or staying alive). There may be other things one can do too. I’d say more research here could be very useful. This approach is very young.
Re: actually satisfying preferences, if my examples about the kid growing up to be a doctor or the option to walk around at night don’t speak to you, then perhaps we just have different intuitions. One thing I will say on this is that you might think that your preferences are satisfied if the set of options is small (you’ll always have a top choice, and you might even feel quite good about it), but if the set grows you might realize that the old thing you were satisfied with is no longer what you want. You’ll only realize this if we keep increasing the capability sets you can pick from, so it does seem to me that it is useful to try to maximize the number of (value-weighted) capability sets available to people.
I think the capabilities approach can be reframed as a form of multi-level utilitarianism. Capabilities matter, but why? Because they contribute to well-being. How do we prioritize among capabilities? Ask people what capabilities matter to them and prioritize the ones that matter more to more people.[1] Why do we prioritize the ones that matter more to more people? Because they have a greater impact on aggregate well-being. Here, we’re using various decision procedures that differ from the archetypal utilitarian calculus (e.g. the capabilities approach, soliciting people’s preferences), but the north star is still aggregate utility.
From the OP: “The third approach, which I personally prefer, is to not even try to make an index but instead to track various clearly important dimensions separately and to try to be open and pragmatic and get lots of feedback from the people ‘being helped.’”
I think that for consequentialists, capability-maximization would fall into the same sphere as identifying and agitating for better laws, social rules, etc. Despite not being deontologists, sophisticated consequentialists recognize the importance of deontological-type structures, and thinking in terms of capabilities (which seem similar to rights, maybe negative rights in some cases like walking at night) might be useful in the same way that human rights are useful—as a tool to clarify one’s general goals and values and interpersonally coordinate action.
Thanks for writing this—I really enjoyed it.
Another similar point drawn from Sen is that an exclusive SWB focus deprioritizes providing resources to people who are better able to psychologically adapt to bad circumstances.
I think the example he uses is a person who has been poor all of their life (and therefore used to it) vs. a person who recently became poor, who might subjectively feel worse about it. He believes that it’s counterintuitive to say that we should prioritize allocating resources to the second person over the first person.
That first point could be rephrased as “an exclusive SWB focus prioritizes providing resources to people who are less able to psychologically adapt to bad circumstances.” That seems like a good approach to me.
In the example you give, I’m not sure the previous circumstances are relevant to the question. In that situation, I would prioritise the person who was suffering the most (all else equal).
Thanks for posting this! I do think lots of people in EA take a more measuring-happiness/preference-satisfaction approach, and it’s really useful to offer alternatives that are popular elsewhere.
My notes and questions on the post:
Here’s how I understand the main framework of the “capability approach,” based mostly on this post, the linked Tweet, and some related resources (including SEP and ChatGPT):[1]
“Freedom to achieve [well-being]” is the main thing that matters from a moral perspective.
(This post then implies that we should focus on increasing people’s freedom to achieve well-being / we should maximize (value-weighted) capabilities.)
“Well-being” breaks down into functionings (stuff you can be or do, like jogging or being a parent) and capabilities (the ability to realize a functioning: to take some options — choices)
Examples of capabilities: having the option of becoming a parent, having the option of not having children, having the option of jogging, having the option of not jogging, etc. Note: if you live in a country where you’re allowed to jog, but there are no safe places to jog, you do not actually have the capability to jog.
Not all functionings/capabilities are equal: we shouldn’t naively list options and count them. (So e.g. the ability to spin and clap 756 times is not the same as the option to have children, jog, or practice a religion.) My understanding is that the capability approach doesn’t dictate a specific approach to comparing different capabilities, and the post argues that this is a complexity that is just a fact of life that we should accept and pragmatically move forward with:
“Yes, it’s true that people would rank capability sets differently and that they’re very high dimensional, but that’s because life is actually like this. We should not see this and run away to the safety of clean (and surely wrong) simple indices. Instead, we should try to find ways of dealing with this chaos that are approximately right.”
In particular, even if it turns out that someone is content not jogging, them having the ability to jog is still better than them not having this ability.
My understanding of the core arguments of the post, with some questions or concerns I have (corrections or clarifications very much appreciated!):
What the “capability approach” is — see above.
Why this approach is good
It generally aligns with our intuitions about what is good.
I view this as both a genuine positive, and also as slightly iffy as an argument — I think it’s good to ground an approach in intuitions like “it’s good for a woman to choose whether to walk at night even if she might not want to”, but when we get into things like comparing potential areas of work, I worry about us picking approaches that satisfy intuitions that might be wrong. See e.g. Don’t Balk at Animal-friendly Results, if I remember that argument correctly, or just consider various philanthropic efforts that focus on helping people locally even if they’re harder to help and in better conditions than people who are farther away — I think this is generally justified with things like “it’s important to help people locally,” which to me seems like over-fitting on intuitions.
At the same time, the point about women being happier than men in the 1970s in the US seems compelling. Similarly, I agree that I don’t personally maximize anything like my own well-being — I’m also “a confused mess of priorities.”
It’s safer to maximize capabilities than it is to maximize well-being (directly), which both means that it’s safer to use the capabilities approach and is a signal that the capabilities approach is “pointing us in the right direction.”
A potentially related point that I didn’t see explicitly: this approach also seems safer given our uncertainty about what people value/what matters. This is also related to 2d.
This approach is less dependent on things like people’s ability to imagine a better situation for themselves.
This approach is more agnostic about what people choose to do with their capabilities, which matters because we’re diverse and don’t really know that much about the people we’re trying to help.
This seems right, but I’m worried that once you add the value-weighting for the capabilities, you’re imposing your biases and your views on what matters in a similar way to other approaches to trying to compare different states of the world.
So it seems possible that this approach is either not very useful by saying: “we need to maximize value-weighted capabilities, but we can’t choose the value-weightings,” (see this comment, which makes sense to me) or transforms back into a generic approach like the ones more commonly used in EA — deciding that there are good states and trying to get beings into those states (healthy, happy, etc.). [See 3bi for a counterpoint, though.]
Some downsides of the approach (as listed by the post)
It uses individuals as the unit of analysis and assumes that people know best what they want, and if you dislike that, you won’t like the approach. [SEE COMMENT THREAD...]
I just don’t really see this as a downside.
“A second downside is that the number of sets of capabilities is incredibly large, and the value that we would assign to each capability set likely varies quite a bit, making it difficult to cleanly measure what we might optimize for in an EA context.”
The post argues that we can accept this complexity and move forward pragmatically in a better way than going with clean-but-wrong indices. It lists three examples (two indices and one approach of tracking individual dimensions) that “start with the theory of the capability approach but then make pragmatic concessions in order to try to be approximately right.” These seem to mostly track things that seem like common requirements for many other capabilities, like health/being alive, resources, education, etc.
The influence of the capability approach
Three follow-up confusions/uncertainties/questions (beyond the ones embedded in the summary above):
Did I miss important points, or get something wrong above?
If we just claim that people value having freedoms (or freedoms that will help them achieve well-being), is this structurally similar to preference satisfaction?
The motivation for the approach makes intuitive sense to me, but I’m confused about how this works with various things I’ve heard about how choices are sometimes bad. (Wiki page I found after a quick search, which seemed relevant after a skim.) (I would buy that a lot of what I’ve heard is stuff that’s failed in replications, though.)
Sometimes I actually really want to be told, “we’re going jogging tonight,” instead of being asked, “So, what do you want to do?”
My guess is that these choices are different, and there’s something like a meta-freedom to choose when my choice gets taken away? But it’s all pretty muddled.
I don’t have a philosophy background, or much knowledge of philosophy!
Thank you for this excellent summary! I can try to add a little extra information around some of the questions. I might miss some questions or comments, so do feel free to respond if I missed something or wrote something that was confusing.
--
On alignment with intuitions as being “slightly iffy as an argument”: I basically agree, but all of these theories necessarily bottom out somewhere, and I think they all basically bottom out in the same way (e.g. no one is a “pain maximizer” because of our intuitions around pain being bad). I think we want to be careful about extrapolation, which may have been your point in the comment, because that is where we can either be overly conservative or overly “crazy” (in the spirit of the “crazy train”). Best I can tell, where one stops is mostly a matter of taste, even if we don’t like to admit that or state it bluntly. I wish it were not so.
--
I understand what you’re saying. As was noted in a comment, but not in my post, Sen in particular would advocate for a process where relatively small communities worked out for themselves which capabilities they cared most about and the ordering of the sets. This would not aggregate up into a global ordered list, but it would allow for prioritization within practical situations. If you want to depart from Sen but still try to respect the approach when doing this kind of weighting, you can draw on survey evidence (which is doable and done in practice).
--
I don’t think I have too much to add to 3bi or the questions around “does this collapse into preference satisfaction?”. I agree that in many places this approach will recommend things that look like normal welfarism. However, I think it’s very useful to remember that the reason we’re doing these things is not because we’re trying to maximize happiness or utility or whatnot. For example, if you think maximizing happiness is the actual goal then it would make sense to benchmark lots of interventions on how effectively they do this per dollar (and this is done). To me, this is a mistake borne out of confusing the map for the territory. Someone inspired by the capability approach would likely track some uncontentiously important capabilities (life, health, happiness, at least basic education, poverty) and see how various interventions impact them and try to draw on evidence from the people affected about what they prioritize (this sort of thing is done).
Something I didn’t mention in the post that will also be different from normal welfarism is that the capability approach naturally builds in the idea that one’s endowments (wealth, but also social position, gender, physical fitness, etc.) interact with the commodities they can access to produce capabilities. So if we care about basic mobility (e.g. the capability to get to a store or market to buy food), then someone who is paraplegic, poor, and remote will need a larger transfer than someone who is able-bodied but poor and remote in order to get the same capability. This idea that we care about comparisons across people “in the capability space” rather than “in the money space” or “in the happiness space” can be important (e.g. it can inform how we draw poverty lines or compare interventions), and it is another place where the capability approach differs from others.
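A toy sketch of that conversion-factor idea (the numbers and the linear form are assumptions for illustration only):

```python
def transfer_needed(target_capability, conversion_factor):
    """Resources required to reach a capability level, given a personal
    conversion factor (how effectively resources become capability)."""
    return target_capability / conversion_factor

# Same mobility target, different (hypothetical) conversion factors:
print(transfer_needed(1.0, 1.0))   # able-bodied, poor, remote: 1 unit of resources
print(transfer_needed(1.0, 0.25))  # paraplegic, poor, remote: 4 units
```

Comparing people by the capability achieved, rather than by the size of the transfer, is what “comparisons in the capability space” cashes out to here.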
All that said, I agree that in practice the stuff capability-inspired people do will often not look very different from what normal welfarism would recommend.
--
Related: you asked “If we just claim that people value having freedoms (or freedoms that will help them achieve well-being), is this structurally similar to preference satisfaction?”
I think this idea is similar to this comment and I think it will break for similar meta-level reasons. Also, it feels a bit odd to me to put myself in a preference satisfaction mindset and then assert someone’s preferences. To me, a huge part of the value of preference satisfaction approaches is that they respect individual preferences.
--
Re: paradox of choice: If more choices are bad for happiness, then this would be another place where the capability approach differs from a “max happiness” approach, at least in theory. In practice, one might think that the practical results of limiting choices are likely to be bad (who gets to set the limits? how? etc.) and so this won’t matter. I personally would bet against most of those empirical results mattering. I have large doubts that they would replicate in their original consumer choice context, and even if they do replicate I have doubts that they would apply to the “big” things in life that the capability approach would usually focus on. But all that said, I’m very comfortable with the idea that this approach may not max happiness (or any other single functioning).
On the particular example of: “Sometimes I actually really want to be told, ‘we’re going jogging tonight,’ instead of being asked, ‘So, what do you want to do?’”
Yeah, I’m with you on being told to exercise. I’m guessing you like this because you’re being told to do it, but you know that you have the option to refuse. I think that there are lots of cases where we like this sort of thing, and they often seem to exist around base appetites or body-related drives (e.g. sex, food, exercise). To me, this really speaks to the power of capabilities. My hunch is you like being told “you have to go jogging” when you know that you can refuse but you’d hate it if you were genuinely forced to go jogging (if you genuinely lacked the option to say no).
--
Again, thank you for such a careful read and for offering such a nice summary. In a lot of places you expressed these ideas better than I did. It was fun to read.
Great, thank you! I appreciate this response, it made sense and cleared some things up for me.
Re: the jogging example:
I think you might be right, and this is just something like the power of defaults (rather than choices being taken away). Having good defaults is good.
(Also, I’m curating the post; I think more people should see it. Thanks again for sharing!)
I really liked this post. I think that the author raises a good point at the end, saying that for practical purposes, all these different paradigms basically lead to wanting to keep people alive and to get more money to the people who have less of it.
But I found this to be a much more intuitive framework for me personally. From a systemic-change standpoint, it’s probably easier/cheaper to improve the mental health of rich people who have poor mental health than to increase the wellbeing of poor people who are suffering a similar amount. Maybe not based on the number of dollars you give to two individuals in these separate situations, but based on the effort it would take to systemically prevent or reduce the prevalence of that type of suffering. I like that the capabilities approach takes a strong stand that systemic change to reduce poverty is worth doing even if it’s really hard, and worth prioritizing over e.g. rich people with poor mental health.
Great post, thank you for writing it! I had never heard of this approach before and I think it’s very interesting. Strong upvote. While I don’t agree it’s better than maximizing happiness or preference satisfaction, I do think it’s a valuable way of looking at things and could perhaps be used instrumentally in some cases.
My biggest disagreement is that I don’t see how capabilities can be an intrinsic value. When I talk about maximizing utility or happiness, I mean maximizing mental states people prefer to be in (this is not the same as preference satisfaction). I got this from Derek Parfit, though I might have misunderstood him. Under this definition, watching a sad movie can be great, as it’s a mental state you want to be in. I do think “women being happier despite having fewer options” is an interesting case, but women would still likely prefer to be in a mental state where they have more options.
In both the case of the children and that of walking safely at night: imagine we lived in a world where every child coincidentally liked to follow the path their parents push them toward, or a world where women don’t mind staying at home at all rather than walking at night. Would it really matter then if the options weren’t there? It’s because most people want to have those options in our world that it’s important to have them.
Thank you for the kind words. I’m a little confused by the sentence about women being happier despite having fewer options.
I’ll try to respond to what I see as the overall question of the comment (feel free to correct me if I didn’t get it): If we assume someone has the thing they like in their choice set, why is having more choices good?
I think there are two answers to this, theoretical and pragmatic. I struggle sometimes to explain the theoretical one because it just seems blindingly obvious to me. That isn’t an argument, but it’s just to say that my intuitions lean heavily this way, so I sometimes struggle to explain it. I think that a full human life is one where we are able to make important choices for ourselves. That, by definition, means that we need more than one choice, even if the one choice in the set is the one you would pick regardless. I think that this also scales up to more choices. Perhaps one way to frame this is to say that the journey to the choice is valuable. A world where you never got choices but were always forced to select the thing that you would pick anyway is a bad world to me.
The pragmatic answer applies even if you don’t buy the intrinsic value of options in theory. All that you need to value the pragmatic part is to be anti-paternalistic and to think that people are quite heterogeneous in what they’re trying to do in life. If you buy those two things, then you’re going to want to strive to give people options rather than to max some index of the one key functioning, because if you give people more choices then these quite different people can all go off and do quite different things.
I will also say, this is not at all my area of research so if you find this interesting then consider the readings at the end of my post.
Yeah I get that it’s difficult to explain these intuitions, I was struggling in my original comment a lot too.
I meant to say that the study saying women were happier than men despite having fewer options perhaps doesn’t capture what I value. They might be happier in the narrow, conventional way happiness is defined, but not in the broad way I define happiness (mental states you prefer being in). But I could be wrong about this.
I don’t have much to say about the rest of your comment other than that I think it’s interesting.
Don’t people also have preferences for having more options?
I believe that’s generally outside the model. It’s like asking if people have preferences about the ranking of their preferences.
I found this an interesting framing, thank you! I hadn’t heard of the multidimensional poverty index before.
(1) Do you know how widely this measure is currently being used in e.g. development research or charity evaluation? I was kind of surprised at how specific some of the components of the index are (e.g. I imagine some of them are kind of hard to straightforwardly calculate based on past surveys—not sure if all of these questions are standard to ask?).
(2) Minor point: I wonder if you will reach more of your intended audience by changing the title of this post to “The Capability Approach (to Improving Human Welfare)” or something. I initially pattern matched the word “capability” in this title onto something about AI, since I think on the EA forum folks talk more about capability in terms of AI systems than anything else.
Glad the post was useful.
The capability approach is widely used as a “north star” guiding much development practice and research. Amartya Sen has been very influential in this community. The MPI is pretty new, but it has at least a little purchase at the UNDP and World Bank, among others. It’s probably worth reiterating at this point that I’d say the MPI is “capability inspired” rather than “the capability approach in practice.”
On the question of data: it’s the opposite. I’m quite confident that those questions were chosen because we have a reasonable amount of cross-country historical data on them thanks to the DHS surveys.
Title has been updated. Thank you! This didn’t occur to me.
Thanks for writing this post :)
If anyone wants to read more (but finds SEP too dense) IEP has a page on Sen’s capability approach as well.
Thanks for writing this! It’s a well-written introduction and it’s an approach that should be more widely known + highly rated in EA.
Another useful application of the capability approach I’ve encountered is in health. While saving lives is simple to value via lots of approaches, it’s more difficult to know how to weigh disability and disease. The QALY/DALY approach is a useful measurement tool, but I find it helpful to have a theory for why we should care about disability/disease beyond just the lens of QALY/DALYs. Venkatapuram (2011) defends a notion of health as a cluster of basic capabilities—and I find that a really useful foundation to think from.
While the capability approach definitely has some upsides, such as measuring wellbeing in terms of people’s positive freedom (rather than simply not being infringed upon by others, people are only “free” if they have meaningful opportunities available to them), one downside of this approach is that it still has problems similar to other utilitarian metrics if the goal is to maximise wellbeing. For example, even with regards to discrimination, if the people doing the discriminating gained more capabilities than were lost by those being discriminated against, then the discrimination would be justified. One would still need a harm cap stating that when any one person or group of people loses enough capabilities, no such actions are justified even if there is a net increase in capabilities.
Also, I think the problem associated with traditional methods of measuring wellbeing (e.g., happiness, SWB) where they don’t align with people’s priorities can be solved if the metric being measured is personal meaning: even if having children, believing in a religion, or viewing pieces of art that evoke sadness don’t necessarily maximise happiness, they can all facilitate feelings of subjective meaning in individuals. That being said, this still isn’t perfect, as the AI example could just be about using a different drug that maximises different neurotransmitters like serotonin or oxytocin rather than simply dopamine.
Thanks so much for this interesting post—this framing of wellbeing had never occurred to me before. On the first example you use to explain why you find the capabilities framing to be more intuitive than a preference framing: can’t we square your intuition that the second child’s wellbeing is better with preference satisfaction by noting that people often have a preference to have the option to do things they don’t currently prefer? I think this preference comes from (a) the fact that preferences can change, so the option value is instrumentally useful, and (b) it feels better to do what you’d prefer freely than to do so with no other option. You can account for the second example the same way.
The example of women reporting to be happier in the 70s (which let’s take to be true for the sake of argument) is interesting, but for me that’s a point against hedonic accounts of wellbeing, not preference accounts of wellbeing: our happiness is just one (albeit very, very important!) thing we care about. So whilst women might have been happier in the US in the 70s, they may well have had preferences thwarted by discrimination… and even if their preferences were satisfied as in your first two examples (e.g. suppose they preferred playing the economic role they were forced by discrimination to play), they presumably would have preferred to have more options, and to choose freely.
I don’t purport to have shown why the capabilities account of wellbeing is wrong, but rather to show why I’m not convinced that the holes (in the preference account of wellbeing) it’s supposed to fill, exist in the first place.
[Sorry for the scrappy writing—written on my phone whilst walking]
Glad you found this interesting, and you have my sympathies as another walking phone writer.
A few people have raised similar points about preference structures. I can give perhaps a sharper example that addresses this specific point. I left a lot out of my original post in the interest of brevity, so I’m happy to expand more in the comments.
Probably the sharpest example I can give of a place where the capability approach separates from preference satisfaction is the issue of adaptive preferences. This is an extended discussion, but the gist is that it is not so hard to come up with situations where people do not seem upset by some x even though, upon reflection (or with full/better information), they might well be. There is ample space for this in the capability approach, but there is not in subjective preference satisfaction. This point is similar in spirit to my women-in-the-1970s example, and similar to where I noted in the text that “Using subjective measures to allocate aid means that targeting will depend in part on people’s ability to imagine a better future (and thus feel dissatisfaction with the present).” The chapter linked above gives lots of nice examples and has good discussion.
If you want a quick example: consider a case where women are unhappy because they lack the right to vote. In the capability approach, this can only be addressed in one way, which is to expand their capability to vote. In preference satisfaction or happiness approaches, one could also do that, or one could shape the information environment so that women no longer care about this and this would fix the problem of “I have an unmet preference to vote” and “I’m unhappy because I can’t vote.” I prefer how the capability approach handles this. The downside to the way the capability approach handles this is that even if the women were happy about not voting the capability approach would still say “they lack the capability to vote” and would suggest extending it (though of course the women could still personally not exercise that option, and so not do the functioning vote).
Hope that helps to make some of these distinctions sharper. Cheers.